Integrating Scalar Analyses and Optimizations in a Parallelizing and Optimizing Compiler
Author
Abstract
Similar papers
The Limits of Alias Analysis for Scalar Optimizations
In theory, increasing alias analysis precision should improve compiler optimizations on C programs. This paper compares alias analysis algorithms on scalar optimizations, including an analysis that assumes no aliases, to establish a very loose upper bound on optimization opportunities. We then measure optimization opportunities on thirty-six C programs. In practice, the optimizations are rarely...
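A minimal, hypothetical C sketch of the effect described above (the function and pointer names are invented, not taken from the paper): when alias analysis cannot prove that p and q never refer to the same object, the compiler must keep the second load of *p, whereas under the "no aliases" assumption it could reuse the value already held in a.

int sum_twice(int *p, int *q) {
    int a = *p;   /* first load of *p                                    */
    *q = 0;       /* store that may or may not write the same location   */
    int b = *p;   /* must reload unless analysis proves p and q are
                     distinct; with no aliases assumed, b could reuse a  */
    return a + b;
}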
A Structured Approach to Proving Compiler Optimizations Based on Dataflow Analysis
This paper reports on the correctness proof of compiler optimizations based on data-flow analysis. We formulate the optimizations and analyses as instances of a general framework for data-flow analyses and transformations, and prove that the optimizations preserve the behavior of the compiled programs. This development is a part of a larger effort of certifying an optimizing compiler by proving...
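As a hedged illustration of the kind of transformation such a framework covers (an invented before/after pair, not the paper's formalization): constant propagation driven by a dataflow analysis replaces uses of a variable known to be constant, and the correctness obligation is exactly that the two versions have the same observable behavior.

/* Before: dataflow analysis establishes that x holds the constant 4
   at both of its uses. */
int cp_before(void) {
    int x = 4;
    int y = x + 1;
    return y * x;          /* returns 20 */
}

/* After constant propagation (and removal of the now-dead x):
   behavior is preserved, both functions return 20. */
int cp_after(void) {
    int y = 4 + 1;
    return y * 4;          /* returns 20 */
}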
The Effect of Compiler Optimizations on Available Parallelism in Scalar Programs
In this paper we analyze the effect of compiler optimizations on fine-grain parallelism in scalar programs. We characterize three levels of optimization: classical, superscalar, and multiprocessor. We show that classical optimizations not only improve a program's efficiency but also its parallelism. Superscalar optimizations further improve the parallelism for moderately parallel machines. For highl...
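A hypothetical before/after pair (invented for illustration, not taken from the paper) showing how a classical transformation such as scalar renaming/privatization can improve parallelism: the single scalar t serializes iterations through false dependences, and giving each iteration its own copy leaves the iterations independent.

/* Before: every iteration reuses the same scalar t, creating
   loop-carried anti- and output dependences on t. */
void sq_before(const int * restrict x, int * restrict y, int n) {
    int t;
    for (int i = 0; i < n; i++) {
        t = x[i] + 1;
        y[i] = t * t;
    }
}

/* After scalar renaming/privatization: each iteration has a private t,
   so (with x and y non-overlapping) iterations are independent and can
   execute in parallel. */
void sq_after(const int * restrict x, int * restrict y, int n) {
    for (int i = 0; i < n; i++) {
        int t = x[i] + 1;
        y[i] = t * t;
    }
}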
Integrating Parallelizing Compilation Technology and Processor Architecture for Cost-Effective Concurrent Multithreading
As the number of transistors on a single chip continues to grow, it is important to think beyond the traditional approaches of compiler optimizations for deeper pipelines and wider instruction issue units to improve performance. This single-threaded execution model limits these approaches to exploiting only the relatively small amount of instruction-level parallelism available in application pr...
Enhancing Software DSM for Compiler-Parallelized Applications
Current parallelizing compilers for message-passing machines only support a limited class of data-parallel applications. One method for eliminating this restriction is to combine powerful shared-memory parallelizing compilers with software distributed-shared-memory (DSM) systems. We demonstrate such a system by combining the SUIF parallelizing compiler and the CVM software DSM. Innovations of t...
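A rough, hypothetical sketch (the names and the block partitioning are assumptions, and the DSM coherence machinery is omitted) of the SPMD-style code a shared-memory parallelizing compiler might emit for a data-parallel loop; the software DSM layer would keep the logically shared arrays coherent across nodes.

#include <stddef.h>

/* Each thread or process runs the same routine on its own block of the
   iteration space; a and b live in the logically shared address space. */
void parallel_loop(double *a, const double *b, size_t n,
                   int my_id, int num_threads) {
    size_t chunk = (n + (size_t)num_threads - 1) / (size_t)num_threads;
    size_t lo = (size_t)my_id * chunk;
    size_t hi = (lo + chunk < n) ? lo + chunk : n;
    for (size_t i = lo; i < hi; i++)
        a[i] = 2.0 * b[i];   /* each thread writes only its own block */
}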